Section: New Results

Model-based Testing

Our research in Model-Based Testing (MBT) aims to extend the coverage of tests. Here, coverage refers to several artefacts: the model, the test scenario/property, and the code of the program under test [60]. Test generation relies on various underlying techniques, such as symbolic animation of models [61] or symbolic execution of programs, by means of dedicated constraint solvers, SMT solvers, or model checkers.

Automated Test Generation from Behavioral Models

Participants : Fabrice Bouquet, Pierre-Christophe Bué, Kalou Cabrera, Jérome Cantenot, Frédéric Dadeau, Stéphane Debricon, Elizabeta Fourneret, Jonathan Lasalle.

We have introduced an original model-based testing approach that takes a behavioural view (modelled in UML) of the system under test and automatically generates test cases and executable test scripts according to model coverage criteria. We have extended this result to SysML specifications for validating embedded systems [24], [57], [56].

In the context of software evolution, we have worked on exploiting the evolution of requirements in order to classify test sequences and precisely target the parts of the system impacted by this evolution [49], [50]. We have proposed to define the life cycle of a test via three test classes: (i) Regression, used to validate that unimpacted parts of the system did not change, (ii) Evolution, used to validate that impacted parts of the system correctly evolved, and (iii) Stagnation, used to validate that impacted parts of the system did actually evolve. The associated algorithms are being implemented in a dedicated prototype to be used in the SecureChange European project [62]. A link with security model proofs has been initiated with partners of the project [51]; it makes it possible to generate test needs associated with the security properties verified on the model.
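
As an illustration, the three-class life cycle can be read as a simple decision rule over a test sequence. The following Python sketch is a hypothetical simplification of [49], [50]: it classifies a sequence from the set of impacted operations it exercises and from whether it still replays on the evolved model (both the names and the exact rule are assumptions, not the prototype's implementation):

```python
from enum import Enum

class TestClass(Enum):
    REGRESSION = "regression"   # exercises only unimpacted parts
    EVOLUTION = "evolution"     # exercises impacted parts, adapted to the new behaviour
    STAGNATION = "stagnation"   # exercises impacted parts, still encodes the old behaviour

def classify(steps, impacted, replays_on_new_model):
    """Classify a test sequence after a model evolution.

    steps               -- operations covered by the test sequence
    impacted            -- set of operations whose behaviour changed
    replays_on_new_model -- True if the sequence is still valid on the evolved model
    """
    if not impacted.intersection(steps):
        return TestClass.REGRESSION
    return TestClass.EVOLUTION if replays_on_new_model else TestClass.STAGNATION

# Example: only the operation "debit" was impacted by the evolution.
assert classify(["login", "credit"], {"debit"}, True) is TestClass.REGRESSION
assert classify(["login", "debit"], {"debit"}, True) is TestClass.EVOLUTION
assert classify(["login", "debit"], {"debit"}, False) is TestClass.STAGNATION
```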

Scenario-Based Verification and Validation

Participants : Fabrice Bouquet, Kalou Cabrera, Frédéric Dadeau, Elizabeta Fourneret.

Test scenarios represent abstract test case specifications that aim at guiding the model animation in order to produce relevant test cases. Unlike the approach of the previous section, this technique is not fully automated, since it requires the user to design the scenario in addition to the model.

We have designed a scenario-based testing language for UML/OCL that can be connected either to a model animation engine [31] or to a symbolic animation engine based on a set-theoretical constraint solver [20]. In the context of the ANR TASCCC project, we are investigating the automation of test generation from Security Functional Requirements (SFR), as defined in the Common Criteria terminology. SFRs represent security functions that have to be assessed during the validation phase of security products (in the project, GlobalPlatform, an operating system for latest-generation smart cards). To achieve this, we are working on the definition of description patterns for security properties, to which a given set of SFRs can be related. These properties are used to automatically generate test scenarios that produce model-based test cases. The traceability, ensured all along the testing process, makes it possible to provide evidence of the coverage of the SFRs by the tests, as required by the Common Criteria to reach the highest Evaluation Assurance Levels. We have proposed a dedicated formalism to express test properties [32]. A test property is first translated into a finite state automaton, whose coverage by a given test suite is then measured. This makes it possible to evaluate the relevance of the test suite w.r.t. a given property.
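
The coverage measure described above can be sketched as follows. Assuming the property automaton is given as a transition map, this Python fragment (a simplification for illustration, not the actual tool) replays a test suite on the automaton and reports the fraction of transitions it exercises:

```python
def transition_coverage(automaton, initial, test_suite):
    """Fraction of the property automaton's transitions covered by a test suite.

    automaton  -- dict mapping (state, event) to the next state
    initial    -- initial state of the automaton
    test_suite -- iterable of event sequences (abstract test cases)
    """
    covered = set()
    for test in test_suite:
        state = initial
        for event in test:
            nxt = automaton.get((state, event))
            if nxt is None:
                break  # the test leaves the behaviour described by the property
            covered.add((state, event))
            state = nxt
    return len(covered) / len(automaton)

# A toy property: one must log in before reading, and may log out.
prop = {("s0", "login"): "s1", ("s1", "read"): "s1", ("s1", "logout"): "s0"}
cov = transition_coverage(prop, "s0", [["login", "read"]])
assert abs(cov - 2 / 3) < 1e-9  # 2 of the 3 transitions are exercised
```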

In the context of the SecureChange project, we also investigate the evolution of test scenarios. As the system evolves, the model evolves, and the associated test scenarios may evolve as well. We are currently extending test generation and the management of system evolutions to ensure the preservation of security.

Mutation-based Testing of Security Protocols

Participants : Frédéric Dadeau, Pierre-Cyrille Héam.

Verification of security protocol models is an important issue. Nevertheless, such verification reasons on a model of the protocol and does not consider its concrete implementation. Even if the model is safe, the protocol may be incorrectly implemented, leading to security flaws when it is deployed. We have proposed a model-based penetration testing approach for security protocols [44]. This technique relies on mutations of an original protocol, proved to be correct, to inject realistic errors that may occur during the protocol implementation (e.g. re-use of existing keys, partial checking of received messages, incorrect formatting of sent messages, use of exponential/xor encryption, etc.). Mutations that lead to security flaws are used to build test cases, defined as sequences of messages representing the behavior of the intruder. We have applied our technique to protocols designed in HLPSL, and implemented a protocol mutation tool that performs the mutations. The mutants are then analyzed by the CL-Atse [90] front-end of the AVISPA toolset [74]. Experiments show the relevance of the proposed mutation operators and the efficiency of the CL-Atse tool in concluding on the vulnerability of a protocol and producing an attack trace that can be used as a test case for implementations.
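
As an illustration of a mutation operator, the following Python sketch implements a "key reuse" mutation on an abstract protocol given as a list of encrypted messages. The representation is hypothetical and much simpler than HLPSL; it only shows the principle of systematically replacing a fresh key by an already known one:

```python
def reuse_key_mutants(protocol):
    """Generate mutants in which a key is replaced by a previously used one,
    modelling the 'key reuse' implementation error.

    protocol -- list of (sender, receiver, message) triples, where a message
                is a pair (payload, key) standing for {payload}_key
    """
    mutants = []
    seen_keys = []
    for i, (snd, rcv, (payload, key)) in enumerate(protocol):
        for old in seen_keys:
            if old != key:
                mutant = list(protocol)            # copy the original protocol
                mutant[i] = (snd, rcv, (payload, old))  # reuse an earlier key
                mutants.append(mutant)
        if key not in seen_keys:
            seen_keys.append(key)
    return mutants

# Toy two-message exchange: the second message should use a fresh key Kbs.
protocol = [("A", "B", ("Na", "Kab")), ("B", "A", ("Nb", "Kbs"))]
mutants = reuse_key_mutants(protocol)
assert len(mutants) == 1
assert mutants[0][1] == ("B", "A", ("Nb", "Kab"))  # Kab wrongly reused
```

Each mutant would then be fed to a model checker (CL-Atse in the actual work) to decide whether the injected error opens an attack.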

Code-related Test Generation and Static Analysis

Participants : Alain Giorgetti, Frédéric Dadeau, Ivan Enderlin.

In 2011, we enriched with program slicing [33] an original combination of static analysis and structural program testing for C program debugging, presented in 2010 and implemented in a prototype called SANTE (Static ANalysis and TEsting). The method first runs a static value analysis, which generates alarms when it cannot guarantee the absence of run-time errors. In order to simplify test generation, the method then reduces the program by slicing, producing one or several simpler programs while preserving a subset of the alarms. Finally, the method performs alarm-guided test generation on the simplified program(s) in order to confirm or reject the alarms. Experiments on real examples have shown that verification is faster when the code is reduced by slicing. Moreover, the simplified programs make the detected errors and the remaining alarms easier to analyze.
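
The three stages of the method can be sketched as a small pipeline. In this hypothetical Python fragment, the value analysis, the slicer and the test generator are injected as functions, so that only the control flow of the combination is shown (none of this is SANTE's actual code):

```python
def sante_pipeline(program, value_analysis, slicer, test_generator):
    """Hypothetical orchestration of the three-stage method:
    1. the value analysis emits alarms it cannot discharge,
    2. slicing reduces the program with respect to each alarm,
    3. alarm-guided test generation confirms or rejects each alarm.
    """
    verdicts = {}
    for alarm in value_analysis(program):
        reduced = slicer(program, alarm)          # smaller program, alarm preserved
        trace = test_generator(reduced, alarm)    # a confirming input, or None
        verdicts[alarm] = "bug confirmed" if trace else "alarm remains"
    return verdicts

# Toy stand-ins for the three analyzers, to exercise the control flow.
verdicts = sante_pipeline(
    "prog.c",
    value_analysis=lambda p: ["div_by_zero", "out_of_bounds"],
    slicer=lambda p, a: p,
    test_generator=lambda p, a: "x=0" if a == "div_by_zero" else None,
)
assert verdicts == {"div_by_zero": "bug confirmed", "out_of_bounds": "alarm remains"}
```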

We have designed a grey-box testing and analysis tool [45] for Java programs, possibly annotated with JML annotations. This tool uses a set-theoretical constraint representation of the Java code of class methods. It provides an efficient means for (i) generating structural test cases satisfying a given code-coverage criterion (all-nodes, all-transitions, all-k-paths) and taking into account the JML annotations associated with the method, and (ii) performing static analysis on the Java code, either to detect potential runtime errors (null pointer dereferencing, division by zero, etc.) or to detect non-conformances between the Java program and its JML specification (invariant, internal precondition or postcondition violation).
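
For illustration, the all-k-paths criterion can be approximated on a control-flow graph by bounding how many times each edge is taken. The following Python sketch enumerates the corresponding coverage targets; this is one simple reading of the criterion, not the tool's implementation:

```python
def all_k_paths(cfg, entry, exit_node, k):
    """Enumerate entry-to-exit paths in which each edge is taken at most k
    times -- an approximation of the 'all-k-paths' coverage criterion.

    cfg -- dict mapping each node to the list of its successors
    """
    paths = []

    def walk(node, path, used):
        if node == exit_node:
            paths.append(path)
            return
        for succ in cfg.get(node, []):
            edge = (node, succ)
            if used.get(edge, 0) < k:  # do not take any edge more than k times
                walk(succ, path + [succ], {**used, edge: used.get(edge, 0) + 1})

    walk(entry, [entry], {})
    return paths

# A graph with one self-loop: k bounds the number of loop traversals.
cfg = {"entry": ["loop"], "loop": ["loop", "exit"], "exit": []}
assert len(all_k_paths(cfg, "entry", "exit", 1)) == 2  # 0 or 1 loop iteration
assert len(all_k_paths(cfg, "entry", "exit", 2)) == 3  # up to 2 iterations
```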

We have designed a new annotation language for PHP, named PRASPEL [48], for PHP Realistic Annotation SPEcification Language. This language relies on realistic domains, which serve two purposes. First, they assign to a piece of data a domain that is supposed to be specific to the context in which it is used. Second, they provide two features used for test generation: (i) samplability makes it possible to automatically generate a value belonging to the realistic domain, so as to produce test data; (ii) predicability makes it possible to check whether a value belongs to a realistic domain. This approach is supported by a dedicated framework for PHP which makes it possible to produce unit test cases using random data generators, execute the test cases on an instrumented implementation, and decide the conformance of the code w.r.t. the annotations by runtime assertion checking.
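
Praspel itself targets PHP, but the two features of realistic domains can be transcribed in a few lines of Python (class and method names here are hypothetical): a domain couples a sampler, used for test-data generation, with a predicate, used for runtime assertion checking:

```python
import random

class RealisticDomain:
    """A realistic domain couples a generator (samplability) with a
    membership test (predicability)."""
    def sample(self, rng):
        raise NotImplementedError
    def predicate(self, value):
        raise NotImplementedError

class BoundedInteger(RealisticDomain):
    """Integers within [lo, hi] -- a minimal realistic domain."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def sample(self, rng):
        # samplability: draw a value inside the domain (test-data generation)
        return rng.randint(self.lo, self.hi)

    def predicate(self, value):
        # predicability: check membership (runtime assertion checking)
        return isinstance(value, int) and self.lo <= value <= self.hi

rng = random.Random(42)
dom = BoundedInteger(1, 10)
assert all(dom.predicate(dom.sample(rng)) for _ in range(100))
assert not dom.predicate(0)
assert not dom.predicate("5")
```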

Random Testing

Participant : Pierre-Cyrille Héam.

The random testing paradigm is a simple and tractable software assessment method used in various testing approaches. When doing random testing, the main quality required of the random sampler is that its choices be objective and independent of the tester's choices or convictions: a solution is to require uniform random generators.

In [86], a method is proposed for drawing paths in finite graphs uniformly at random, and it is shown how to use these techniques in a control-flow-graph-based testing approach for C programs. Nevertheless, a finite graph often represents a strong abstraction of the system under test, and many abstract tests generated by the approach may be impossible to execute on the implementation. In [53], we propose a new approach, extending this previous work, that handles stack calls during random test generation while preserving uniformity.
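
The classical counting method underlying such uniform path generation can be sketched in Python: path counts are tabulated by dynamic programming, then a path is drawn step by step with probabilities proportional to those counts. This is a simplification of [86], without the stack-call handling of [53]:

```python
import random

def count_paths(graph, length, targets):
    """counts[k][v] = number of paths of length k from v ending in a target."""
    states = list(graph)
    counts = [{v: (1 if v in targets else 0) for v in states}]
    for k in range(1, length + 1):
        counts.append({v: sum(counts[k - 1][w] for w in graph[v]) for v in states})
    return counts

def draw_uniform_path(graph, start, length, targets, rng):
    """Draw, uniformly at random, a path of the given length from start to a
    target state, choosing each step with probability proportional to the
    number of completions it allows."""
    counts = count_paths(graph, length, targets)
    path, v = [start], start
    for k in range(length, 0, -1):
        succs = [w for w in graph[v] if counts[k - 1][w] > 0]
        weights = [counts[k - 1][w] for w in succs]
        v = rng.choices(succs, weights=weights)[0]
        path.append(v)
    return path

# In this graph only one path of length 2 reaches "c" from "a".
graph = {"a": ["b", "c"], "b": ["c"], "c": []}
path = draw_uniform_path(graph, "a", 2, {"c"}, random.Random(0))
assert path == ["a", "b", "c"]
```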

When doing random testing on inputs, the generation algorithm has to be efficient enough to produce a huge quantity of data. Every programming language provides good uniform random generators (pseudo-random, to be more precise) for numbers. However, the question is more complex for non-numerical data, such as tree data structures, logical formulas, graphs, etc. In [54], we present the Seed prototype, which uniformly generates recursive data structures satisfying a given grammar-like specification. The tool is easy to use and its generation is uniform. Moreover, it handles some equational equivalences on data structures to shape the distribution.
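
The recursive counting method that Seed generalises to arbitrary grammar-like specifications can be illustrated on binary trees. This Python sketch shows the principle only, not the Seed tool itself: structures of each size are counted, then a uniform draw is decomposed recursively according to those counts:

```python
import random
from functools import lru_cache

@lru_cache(maxsize=None)
def count(n):
    """Number of binary trees with n internal nodes (the Catalan numbers)."""
    if n == 0:
        return 1
    return sum(count(k) * count(n - 1 - k) for k in range(n))

def uniform_tree(n, rng):
    """Draw a binary tree with n internal nodes uniformly at random, by the
    classical recursive counting method."""
    if n == 0:
        return None
    r = rng.randrange(count(n))          # index of the tree to build
    for k in range(n):                   # choose the left-subtree size k
        block = count(k) * count(n - 1 - k)
        if r < block:
            return (uniform_tree(k, rng), uniform_tree(n - 1 - k, rng))
        r -= block
    raise AssertionError("unreachable")

def size(t):
    return 0 if t is None else 1 + size(t[0]) + size(t[1])

assert count(3) == 5                     # the 5 binary trees of size 3
t = uniform_tree(4, random.Random(0))
assert size(t) == 4                      # generated trees have the requested size
```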